14 research outputs found
Visualizing Robot Intent for Object Handovers with Augmented Reality
Humans are highly skilled at communicating when and where a
handover will occur. In contrast, even state-of-the-art robotic
implementations of handovers display a general lack of communication skills.
We propose visualizing the internal state and intent of robots for
Human-to-Robot Handovers using Augmented Reality. Specifically, we visualize 3D
models of the object and the robotic gripper to communicate the robot's
estimate of the object's location and the pose with which it intends to grasp
the object. We conduct a user study with 16 participants, in which each
participant handed over a cube-shaped object to the robot 12 times. Results
show that visualizing robot intent using augmented reality substantially
improves the subjective experience of the users for handovers and decreases the
time to transfer the object. Results also indicate that the benefits of
augmented reality are still present even when the robot makes errors in
localizing the object.
Comment: 6 pages, 4 figures, 2 tables
Object-Independent Human-to-Robot Handovers using Real Time Robotic Vision
We present an approach for safe and object-independent human-to-robot
handovers using real time robotic vision and manipulation. We aim for general
applicability with a generic object detector, a fast grasp selection algorithm
and by using a single gripper-mounted RGB-D camera, hence not relying on
external sensors. The robot is controlled via visual servoing towards the
object of interest. Putting a high emphasis on safety, we use two perception
modules: human body part segmentation and hand/finger segmentation. Pixels that
are deemed to belong to the human are filtered out from candidate grasp poses,
hence ensuring that the robot safely picks the object without colliding with
the human partner. The grasp selection and perception modules run concurrently
in real-time, which allows monitoring of the progress. In experiments with 13
objects, the robot was able to successfully take the object from the human in
81.9% of the trials.
Comment: IEEE Robotics and Automation Letters (RA-L). Preprint version.
Accepted September, 2020. The code and videos can be found at
https://patrosat.github.io/h2r_handovers
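The safety-filtering idea described in the abstract, discarding grasp candidates whose pixels fall on the segmented human, can be sketched as follows. This is an illustrative sketch only: the function name, array shapes, and toy data are assumptions, not the authors' actual implementation.

```python
import numpy as np

def filter_safe_grasps(grasp_pixels, human_mask):
    """Keep only grasp candidates whose pixel locations avoid the human mask.

    grasp_pixels: (N, 2) array of (row, col) pixel coordinates, one per candidate.
    human_mask:   (H, W) boolean array, True where a human body part was segmented.
    """
    rows, cols = grasp_pixels[:, 0], grasp_pixels[:, 1]
    on_human = human_mask[rows, cols]  # True where a candidate touches the human
    return grasp_pixels[~on_human]

# Toy example: a 4x4 image where the right half is "human".
mask = np.zeros((4, 4), dtype=bool)
mask[:, 2:] = True
candidates = np.array([[0, 0], [1, 3], [2, 1], [3, 2]])
safe = filter_safe_grasps(candidates, mask)
print(safe)  # only the candidates in columns 0-1 survive
```

In practice the same masking would be applied per frame, so that grasp selection and human segmentation can run concurrently as the abstract describes.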
Embodied gesture interaction for immersive maps
With the increasing availability of head-mounted displays for virtual reality and augmented reality, we can create immersive maps in which the user is closer to the data. Embodiment is a key concept, allowing the user to act upon virtual objects in an immersive environment. Our work explores the use of embodied interaction for immersive maps. We propose four design considerations for embodied maps and embodied gesture interaction with immersive maps: object presence, consistent physics, human body skills, and direct manipulation. We present an example of an immersive flow map with a series of novel embodied gesture interactions, which adhere to the proposed design considerations. The embodied interactions allow users to directly manipulate immersive flow maps and explore origin-destination flow data in novel ways. Authors of immersive maps can use the four proposed design considerations for creating embodied gesture interactions. The discussed example interactions apply to diverse types of immersive maps and will hopefully incite others to invent more embodied interactions for immersive maps.
Visibility Maximization Controller for Robotic Manipulation
Occlusions caused by a robot's own body are a common problem for closed-loop control methods employed in eye-to-hand camera setups. We propose an optimization-based reactive controller that minimizes self-occlusions while achieving a desired goal pose. The approach allows coordinated control between the robot's base, arm and head by encoding the line-of-sight visibility to the target as a soft constraint along with other task-related constraints, and solving for feasible joint and base velocities. The generalizability of the approach is demonstrated in simulated and real-world experiments on robots with fixed or mobile bases, with moving or fixed objects, and with multiple objects. The experiments revealed a trade-off between occlusion rates and other task metrics. While a planning-based baseline achieved lower occlusion rates than the proposed controller, it did so at the expense of highly inefficient paths and a significant drop in task success. The proposed controller, in contrast, is shown to improve visibility to the target object(s) without sacrificing much task success or efficiency. Videos and code can be found at: rhys-newbury.github.io/projects/vmc/.
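The soft-constraint idea in the abstract, folding a visibility term into the velocity-level optimization alongside the task objective, can be sketched as a weighted least-squares problem. All names, Jacobians, and weights below are illustrative placeholders, not the authors' actual formulation.

```python
import numpy as np

def solve_velocities(J, v_goal, j_vis, vis_rate, w_vis=0.5):
    """Solve min ||J qd - v_goal||^2 + w_vis * ||j_vis . qd - vis_rate||^2.

    J:        (m, n) task Jacobian mapping joint velocities to task velocity.
    v_goal:   (m,) desired task-space velocity.
    j_vis:    (n,) gradient of a scalar line-of-sight visibility measure.
    vis_rate: desired rate of change of the visibility measure.
    """
    # Stack the visibility row under the task rows, scaled by sqrt of its weight,
    # so one least-squares solve trades off both objectives.
    A = np.vstack([J, np.sqrt(w_vis) * j_vis[None, :]])
    b = np.concatenate([v_goal, [np.sqrt(w_vis) * vis_rate]])
    qd, *_ = np.linalg.lstsq(A, b, rcond=None)
    return qd

J = np.array([[1.0, 0.0, 0.2],
              [0.0, 1.0, 0.1]])      # toy Jacobian: 2 task rows, 3 joints
v_goal = np.array([0.1, -0.05])      # desired end-effector velocity
j_vis = np.array([0.0, 0.3, 1.0])    # assumed visibility gradient
qd = solve_velocities(J, v_goal, j_vis, vis_rate=0.2)
print(qd.round(3))
```

Because the visibility term enters as a weighted row rather than a hard constraint, the solver can trade it off against task tracking, which matches the occlusion-versus-task-metric trade-off reported in the experiments.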
Demonstrating Cloth Folding to Robots: Design and Evaluation of a 2D and a 3D User Interface